Convolutional Neural Fabrics
Despite the success of CNNs, selecting the optimal architecture for a given
task remains an open problem. Instead of aiming to select a single optimal
architecture, we propose a "fabric" that embeds an exponentially large number
of architectures. The fabric consists of a 3D trellis that connects response
maps at different layers, scales, and channels with a sparse homogeneous local
connectivity pattern. The only hyper-parameters of a fabric are the number of
channels and layers. While individual architectures can be recovered as paths,
the fabric can in addition ensemble all embedded architectures together,
sharing their weights where their paths overlap. Parameters can be learned
using standard methods based on back-propagation, at a cost that scales
linearly in the fabric size. We present benchmark results competitive with the
state of the art for image classification on MNIST and CIFAR10, and for
semantic segmentation on the Part Labels dataset.
Comment: Corrected typos (in proceedings of NIPS 2016).
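As a rough illustration of the trellis connectivity, here is a minimal PyTorch sketch of a single fabric node that fuses response maps from the previous layer at the same, coarser, and finer scales (the class name, channel counts, and resampling choices are assumptions for illustration, not the exact configuration used in the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FabricNode(nn.Module):
    """One trellis node at (layer, scale): fuses response maps coming from the
    previous layer at the same, coarser, and finer scales."""
    def __init__(self, channels):
        super().__init__()
        # One 3x3 convolution per incoming edge (same / coarser / finer scale).
        self.from_same = nn.Conv2d(channels, channels, 3, padding=1)
        self.from_coarser = nn.Conv2d(channels, channels, 3, padding=1)
        self.from_finer = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x_same, x_coarser=None, x_finer=None):
        out = self.from_same(x_same)
        if x_coarser is not None:
            # Upsample the lower-resolution neighbour to this node's resolution.
            up = F.interpolate(x_coarser, size=x_same.shape[-2:], mode='nearest')
            out = out + self.from_coarser(up)
        if x_finer is not None:
            # Downsample the higher-resolution neighbour by a factor of two.
            out = out + self.from_finer(F.avg_pool2d(x_finer, 2))
        return F.relu(out)

# Toy fabric: 4 layers, 3 scales, fixed channel width.
if __name__ == "__main__":
    C, n_layers, n_scales = 8, 4, 3
    x = torch.randn(1, C, 32, 32)
    acts = [x] + [F.avg_pool2d(x, 2 ** s) for s in range(1, n_scales)]
    for l in range(n_layers):
        new_acts = []
        for s in range(n_scales):
            node = FabricNode(C)
            finer = acts[s - 1] if s > 0 else None               # higher resolution
            coarser = acts[s + 1] if s + 1 < n_scales else None  # lower resolution
            new_acts.append(node(acts[s], x_coarser=coarser, x_finer=finer))
        acts = new_acts
    print([a.shape for a in acts])  # response maps at every scale after 4 layers
```

Stacking such nodes over layers and scales yields the trellis; any path through it corresponds to a conventional architecture, while evaluating all nodes ensembles the embedded architectures with shared weights.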
Auxiliary Guided Autoregressive Variational Autoencoders
Generative modeling of high-dimensional data is a key problem in machine
learning. Successful approaches include latent variable models and
autoregressive models. The complementary strengths of these approaches, to
model global and local image statistics respectively, suggest hybrid models
that encode global image structure into latent variables while autoregressively
modeling low level detail. Previous approaches to such hybrid models restrict
the capacity of the autoregressive decoder to prevent degenerate models that
ignore the latent variables and only rely on autoregressive modeling. Our
contribution is a training procedure relying on an auxiliary loss function that
controls which information is captured by the latent variables and what is left
to the autoregressive decoder. Our approach can leverage arbitrarily powerful
autoregressive decoders, achieves state-of-the art quantitative performance
among models with latent variables, and generates qualitatively convincing
samples.
Comment: Published as a conference paper at ECML-PKDD 2018.
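A hedged sketch of the kind of objective described above, assuming an encoder, a simple auxiliary decoder, and an autoregressive decoder as user-supplied modules (the function name and the MSE auxiliary term are illustrative simplifications, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def auxiliary_guided_loss(x, encoder, aux_decoder, ar_decoder, beta=1.0, gamma=1.0):
    """Illustrative hybrid objective with an auxiliary reconstruction term.

    encoder(x)       -> (mu, logvar) of the latent code z
    aux_decoder(z)   -> coarse reconstruction of x from z alone
    ar_decoder(x, z) -> autoregressive log-likelihood of x conditioned on z
    """
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization

    # Main term: a powerful autoregressive decoder models x conditioned on z.
    nll_ar = -ar_decoder(x, z)

    # Auxiliary term: force z alone to capture the global image structure,
    # so the model cannot simply ignore its latent variables.
    aux_rec = F.mse_loss(aux_decoder(z), x, reduction='sum') / x.size(0)

    # KL divergence between q(z|x) and a standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)

    return nll_ar + gamma * aux_rec + beta * kl
```

The auxiliary weight controls how much of the image statistics the latent variables must explain, leaving the remaining low-level detail to the autoregressive decoder.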
Weakly Supervised Object Localization with Multi-fold Multiple Instance Learning
Object category localization is a challenging problem in computer vision.
Standard supervised training requires bounding box annotations of object
instances. This time-consuming annotation process is sidestepped in weakly
supervised learning. In this case, the supervised information is restricted to
binary labels that indicate the absence/presence of object instances in the
image, without their locations. We follow a multiple-instance learning approach
that iteratively trains the detector and infers the object locations in the
positive training images. Our main contribution is a multi-fold multiple
instance learning procedure, which prevents training from prematurely locking
onto erroneous object locations. This procedure is particularly important when
using high-dimensional representations, such as Fisher vectors and
convolutional neural network features. We also propose a window refinement
method, which improves the localization accuracy by incorporating an objectness
prior. We present a detailed experimental evaluation using the PASCAL VOC 2007
dataset, which verifies the effectiveness of our approach.
Comment: To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
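The multi-fold re-localization loop can be summarized as follows; this is a minimal sketch in which train_detector, score_windows, and the image attributes (.id, .initial_window(), .candidate_windows) are hypothetical placeholders, and the hard-negative mining and window refinement steps of the full method are omitted:

```python
import numpy as np

def multifold_mil(positive_images, negative_images, train_detector, score_windows,
                  K=10, n_iters=10):
    """Multi-fold multiple-instance learning for weakly supervised localization.

    positive_images are expected to expose .id, .initial_window() and
    .candidate_windows (hypothetical interface); train_detector and
    score_windows are user-supplied callables.
    """
    # Initialize each positive image to a large window covering most of it.
    locations = {img.id: img.initial_window() for img in positive_images}
    folds = np.array_split(np.arange(len(positive_images)), K)

    for _ in range(n_iters):
        new_locations = {}
        for k in range(K):
            # Detector trained on all folds EXCEPT the one it will re-localize,
            # which keeps training from locking onto its own (possibly wrong) windows.
            train_idx = np.concatenate([f for j, f in enumerate(folds) if j != k])
            train_pos = [(positive_images[i], locations[positive_images[i].id])
                         for i in train_idx]
            detector = train_detector(train_pos, negative_images)
            for i in folds[k]:
                img = positive_images[i]
                scores = score_windows(detector, img.candidate_windows)
                new_locations[img.id] = img.candidate_windows[int(np.argmax(scores))]
        locations = new_locations
    # A final detector can then be trained on all folds with the last locations.
    return locations
```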
A robust and efficient video representation for action recognition
This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove outlier matches from the human body as
human motion is not constrained by the camera. Trajectories consistent with the
homography are considered as due to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over the state-of-the-art results.
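A simplified OpenCV sketch of the camera-motion cancellation step (ORB is used here as a stand-in for SURF, which is not available in all OpenCV builds; the human-detector masking and the descriptor computation are reduced to placeholders):

```python
import cv2
import numpy as np

def cancel_camera_motion(prev_gray, curr_gray, human_boxes=()):
    """Estimate a frame-to-frame homography and subtract the camera motion it
    explains from the dense optical flow."""
    detector = cv2.ORB_create(2000)
    kp1, des1 = detector.detectAndCompute(prev_gray, None)
    kp2, des2 = detector.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    def inside_human(pt):
        return any(x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1
                   for (x0, y0, x1, y1) in human_boxes)

    # Drop matches on detected humans: human motion is independent of the camera.
    src, dst = [], []
    for m in matches:
        p1, p2 = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
        if not (inside_human(p1) or inside_human(p2)):
            src.append(p1)
            dst.append(p2)
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)

    # Dense flow, then subtract the flow explained by the homography.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(pts, H).reshape(h, w, 2)
    camera_flow = warped - np.stack([xs, ys], axis=2)
    return flow - camera_flow   # residual motion, used for HOF/MBH descriptors
```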
Learning Disentangled Representations with Reference-Based Variational Autoencoders
Learning disentangled representations from visual data, where different
high-level generative factors are independently encoded, is of importance for
many computer vision tasks. Solving this problem, however, typically requires
to explicitly label all the factors of interest in training images. To
alleviate the annotation cost, we introduce a learning setting which we refer
to as "reference-based disentangling". Given a pool of unlabeled images, the
goal is to learn a representation where a set of target factors are
disentangled from others. The only supervision comes from an auxiliary
"reference set" containing images where the factors of interest are constant.
In order to address this problem, we propose reference-based variational
autoencoders, a novel deep generative model designed to exploit the
weak-supervision provided by the reference set. By addressing tasks such as
feature learning, conditional image generation or attribute transfer, we
validate the ability of the proposed model to learn disentangled
representations from this minimal form of supervision.
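One hedged way to sketch the reference-based setting: split the latent code into a target part and a common part, and use the reference set, where the target factors are constant, to push those factors out of the common code. The loss terms below are illustrative simplifications; the actual model also relies on adversarial components not shown here:

```python
import torch
import torch.nn.functional as F

def reference_based_loss(x_unlabeled, x_reference, encoder, decoder, beta=1.0):
    """Illustrative objective for reference-based disentangling.

    encoder(x) -> (z_target, z_common): z_target should capture the factors
    of interest, z_common everything else.
    """
    # Unlabeled images: both parts of the code are used for reconstruction.
    z_t, z_c = encoder(x_unlabeled)
    rec_u = F.mse_loss(decoder(z_t, z_c), x_unlabeled)

    # Reference images: the target factors are constant across the set, so the
    # decoder must reconstruct them from a constant target code + z_common only.
    z_t_ref, z_c_ref = encoder(x_reference)
    z_t_const = torch.zeros_like(z_t_ref)   # fixed code shared by the reference set
    rec_r = F.mse_loss(decoder(z_t_const, z_c_ref), x_reference)

    # Keep the inferred target code of reference images close to the constant,
    # which discourages z_common from absorbing the factors of interest.
    consistency = F.mse_loss(z_t_ref, z_t_const)

    return rec_u + rec_r + beta * consistency
```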
Areas of Attention for Image Captioning
We propose "Areas of Attention", a novel attention-based model for automatic
image captioning. Our approach models the dependencies between image regions,
caption words, and the state of an RNN language model, using three pairwise
interactions. In contrast to previous attention-based approaches that associate
image regions only to the RNN state, our method allows a direct association
between caption words and image regions. During training these associations are
inferred from image-level captions, akin to weakly-supervised object detector
training. These associations help to improve captioning by localizing the
corresponding regions during testing. We also propose and compare different
ways of generating attention areas: CNN activation grids, object proposals, and
spatial transformers nets applied in a convolutional fashion. Spatial
transformers give the best results. They allow for image specific attention
areas, and can be trained jointly with the rest of the network. Our attention
mechanism and spatial transformer attention areas together yield
state-of-the-art results on the MSCOCO dataset. Our attention mechanism also gives rise to meaningful latent semantic structure in the generated captions.
Comment: Accepted in ICCV 2017.
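A speculative sketch of the three pairwise interactions, using simple bilinear scores over (word, region) pairs (dimensions, parameterization, and the joint softmax are assumptions for illustration):

```python
import torch
import torch.nn as nn

class AreasOfAttentionScores(nn.Module):
    """Attention over image regions driven by three pairwise interactions:
    (RNN state, region), (word, region) and (word, RNN state)."""
    def __init__(self, d_region, d_word, d_state):
        super().__init__()
        self.state_region = nn.Parameter(torch.randn(d_state, d_region) * 0.01)
        self.word_region = nn.Parameter(torch.randn(d_word, d_region) * 0.01)
        self.word_state = nn.Parameter(torch.randn(d_word, d_state) * 0.01)

    def forward(self, regions, words, state):
        """regions: (R, d_region), words: (V, d_word), state: (d_state,)"""
        # Scores over (word, region) pairs combining the three interactions.
        s = (words @ self.word_region) @ regions.t()               # (V, R) word-region
        s = s + (state @ self.state_region) @ regions.t()          # (R,) state-region
        s = s + ((words @ self.word_state) @ state).unsqueeze(1)   # (V, 1) word-state
        # Joint softmax over words and regions, then marginalize each way.
        p = torch.softmax(s.flatten(), dim=0).view_as(s)
        p_word = p.sum(dim=1)      # next-word distribution
        p_region = p.sum(dim=0)    # attention over regions
        return p_word, p_region
```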
Coordinated Local Metric Learning
Mahalanobis metric learning amounts to learning a linear data projection, after which the L2 metric is used to compute distances. To allow more flexible metrics, not restricted to linear projections, local metric learning techniques have been developed. Most of these methods partition the data space using clustering, and for each cluster a separate metric is learned. Using local metrics, however, it is not clear how to measure distances between data points assigned to different clusters. In this paper we propose to embed the local metrics in a global low-dimensional representation, in which the L2 metric can be used. With each cluster we associate a linear mapping that projects the data to the global representation. This global representation directly allows computing distances between points regardless of which local cluster they belong to. Moreover, it also enables data visualization in a single view, and the use of efficient L2-based retrieval methods. Experiments on the Labeled Faces in the Wild dataset show that our approach improves over previous global and local metric learning approaches.
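A minimal numpy sketch of the coordinated embedding: soft cluster assignments from a Gaussian mixture weight one linear projection per cluster, and distances in the resulting global space are plain L2 distances (learning of the mixture and the projections is omitted):

```python
import numpy as np

def coordinated_embedding(x, means, covs, weights, maps):
    """Project a point into the global low-dimensional space.

    means, covs, weights: parameters of a Gaussian mixture over the input space.
    maps: one (d_out, d_in) linear map per mixture component.
    """
    # Soft cluster assignments (mixture responsibilities).
    logp = []
    for mu, cov, w in zip(means, covs, weights):
        diff = x - mu
        logp.append(np.log(w) - 0.5 * diff @ np.linalg.inv(cov) @ diff
                    - 0.5 * np.linalg.slogdet(cov)[1])
    logp = np.array(logp)
    r = np.exp(logp - logp.max())
    r /= r.sum()

    # Responsibility-weighted combination of the per-cluster linear projections.
    return sum(rk * (A @ x) for rk, A in zip(r, maps))

def clml_distance(x, y, means, covs, weights, maps):
    # Distances between any two points are plain L2 distances in the global space.
    return np.linalg.norm(coordinated_embedding(x, means, covs, weights, maps)
                          - coordinated_embedding(y, means, covs, weights, maps))
```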
Anytime Inference with Distilled Hierarchical Neural Ensembles
Inference in deep neural networks can be computationally expensive, and
networks capable of anytime inference are important in scenarios where the
amount of compute or quantity of input data varies over time. In such networks
the inference process can be interrupted to provide a result faster, or continued
to obtain a more accurate result. We propose Hierarchical Neural Ensembles
(HNE), a novel framework to embed an ensemble of multiple networks in a
hierarchical tree structure, sharing intermediate layers. In HNE we control the
complexity of inference on-the-fly by evaluating more or less models in the
ensemble. Our second contribution is a novel hierarchical distillation method
to boost the prediction accuracy of small ensembles. This approach leverages
the nested structure of our ensembles, to optimally allocate accuracy and
diversity across the individual models. Our experiments show that, compared to
previous anytime inference models, HNE provides state-of-the-art
accuracy-compute trade-offs on the CIFAR-10/100 and ImageNet datasets.
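A hedged PyTorch sketch of the nested tree structure and its anytime evaluation (layer types, sizes, and the binary branching are illustrative; the hierarchical distillation loss is not shown):

```python
import torch
import torch.nn as nn

class HierarchicalEnsemble(nn.Module):
    """A binary tree of small networks sharing their early layers.  Averaging
    the first k leaf predictions gives an anytime result that can be refined
    by evaluating more leaves as the compute budget allows."""
    def __init__(self, in_dim=32, hidden=64, n_classes=10, depth=3):
        super().__init__()
        self.depth = depth
        self.blocks = nn.ModuleDict()
        for d in range(depth):
            for i in range(2 ** d):
                inp = in_dim if d == 0 else hidden
                self.blocks[f"{d}_{i}"] = nn.Sequential(nn.Linear(inp, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden, n_classes) for _ in range(2 ** (depth - 1)))

    def forward(self, x, max_leaves=None):
        n_leaves = max_leaves or len(self.heads)
        cache = {}

        def feat(d, i):
            # Memoized so that leaves sharing a prefix reuse its computation.
            if (d, i) not in cache:
                inp = x if d == 0 else feat(d - 1, i // 2)
                cache[(d, i)] = self.blocks[f"{d}_{i}"](inp)
            return cache[(d, i)]

        leaves = [self.heads[i](feat(self.depth - 1, i)) for i in range(n_leaves)]
        return torch.stack(leaves).mean(dim=0)

# Coarse prediction from a single leaf, refined prediction from all leaves.
model = HierarchicalEnsemble()
x = torch.randn(4, 32)
fast, full = model(x, max_leaves=1), model(x)
```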
Predicting Deeper into the Future of Semantic Segmentation
The ability to predict and therefore to anticipate the future is an important
attribute of intelligence. It is also of utmost importance in real-time
systems, e.g. in robotics or autonomous driving, which depend on visual scene
understanding for decision making. While prediction of the raw RGB pixel values
in future video frames has been studied in previous work, here we introduce the
novel task of predicting semantic segmentations of future frames. Given a
sequence of video frames, our goal is to predict segmentation maps of not yet
observed video frames that lie up to a second or further in the future. We
develop an autoregressive convolutional neural network that learns to
iteratively generate multiple frames. Our results on the Cityscapes dataset
show that directly predicting future segmentations is substantially better than
predicting and then segmenting future RGB frames. Prediction results up to half
a second in the future are visually convincing and are much more accurate than
those of a baseline based on warping semantic segmentations using optical flow.
Comment: Accepted to ICCV 2017. Supplementary material available on the authors' webpage.
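A minimal sketch of the autoregressive roll-out (a toy convolutional predictor with hypothetical names; the published model uses a multi-scale architecture):

```python
import torch
import torch.nn as nn

class SegPredictor(nn.Module):
    """Predicts the next-frame segmentation from the last few segmentation maps."""
    def __init__(self, n_classes=19, context=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(context * n_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 3, padding=1))

    def forward(self, past_segs):
        # past_segs: (B, context, n_classes, H, W) soft segmentation maps.
        b, t, c, h, w = past_segs.shape
        return self.net(past_segs.reshape(b, t * c, h, w))

def predict_future(model, past_segs, n_future):
    """Autoregressive roll-out: each prediction is fed back as input to
    predict the next frame, reaching further into the future."""
    preds, window = [], past_segs
    for _ in range(n_future):
        nxt = model(window).softmax(dim=1)
        preds.append(nxt)
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)   # (B, n_future, n_classes, H, W)

# Example: 4 past Cityscapes-style segmentations, predict 3 future frames.
model = SegPredictor()
past = torch.softmax(torch.randn(1, 4, 19, 64, 128), dim=2)
future = predict_future(model, past, n_future=3)
```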